issue/634: InfiniCore support for InfiniLM Llama model adaptation #635
base: main
Conversation
Signed-off-by: Ceng23333 <[email protected]>
```cpp
Runtime *getCpuRuntime();

// Get runtime for a specific device (creates it if it doesn't exist)
Runtime *getRuntime(Device device);
```
Is this interface actually used anywhere?
This interface was added earlier for debugging; it's no longer used.
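For readers of the thread, the "creates it if it doesn't exist" contract can be illustrated with a lazily-populated registry. This is a minimal sketch using stub `Device`/`Runtime` types; names and layout are illustrative, not InfiniCore's actual implementation:

```cpp
#include <map>
#include <memory>

// Illustrative stubs; InfiniCore's real Device and Runtime types differ.
struct Device {
    int type;
    int index;
    bool operator<(const Device &o) const {
        return type != o.type ? type < o.type : index < o.index;
    }
};

struct Runtime {
    explicit Runtime(Device d) : device(d) {}
    Device device;
};

// Look up the runtime for a device, constructing and caching one on first use.
Runtime *getRuntime(Device device) {
    static std::map<Device, std::unique_ptr<Runtime>> runtimes;
    auto it = runtimes.find(device);
    if (it == runtimes.end()) {
        it = runtimes.emplace(device, std::make_unique<Runtime>(device)).first;
    }
    return it->second.get();
}
```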
include/infinicore/nn/rope.hpp (outdated)
```diff
- * @brief RoPE algorithm type
+ * @brief Frequency generation method for RoPE cache
   */
+ enum class FreqGen {
```
Why does this need to exist? Can't we just use Algo everywhere?
```cpp
RoPE(size_t head_dim,
     size_t max_seq_len,
     double theta = 10000.0,
     Algo freq_gen = Algo::GPT_J,
```
So why are there two of them here as well?
If there is an issue with RoPE, it needs to be fixed.
I asked GPT; Hugging Face does indeed compute it this way.
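For reference, the Hugging Face-style frequency table the reply refers to follows the standard formula inv_freq[i] = theta^(-2i/head_dim). The sketch below is a hand-written illustration of that formula, not the PR's cache code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Standard RoPE inverse frequencies, as commonly computed by Hugging Face
// models: inv_freq[i] = theta^(-2i / head_dim) for i in [0, head_dim / 2).
// The rotation angle applied at position p is then p * inv_freq[i].
std::vector<double> ropeInvFreq(std::size_t head_dim, double theta = 10000.0) {
    std::vector<double> inv_freq(head_dim / 2);
    for (std::size_t i = 0; i < inv_freq.size(); ++i) {
        inv_freq[i] = 1.0 / std::pow(theta, (2.0 * i) / head_dim);
    }
    return inv_freq;
}
```

Note the frequency table itself is the same for both layouts; what an Algo value like GPT_J versus GPT_NeoX typically distinguishes is which element pairs get rotated (interleaved (2i, 2i+1) pairs versus split halves (i, i + head_dim/2)), which may be why a separate FreqGen looks redundant.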
Signed-off-by: Ceng23333 <[email protected]>
```diff
 target("infinicore_c_api")
-    set_kind("phony")
+    set_kind("shared")
```
Split the C++ API out into a separate target.
```cpp
std::unique_ptr<MemoryAllocator> device_memory_allocator_;
std::unique_ptr<MemoryAllocator> pinned_host_memory_allocator_;
// Mutex to protect stream access for thread safety
mutable std::mutex stream_mutex_;
```
This lock isn't needed.
```diff
+    // Wrap entire destructor in try-catch to prevent exceptions from causing segfaults
+    try {
         if (pinned_host_memory_allocator_) {
             pinned_host_memory_allocator_.reset();
```
This is overcomplicated.
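A sketch of the simplification being asked for, under the assumption that the allocators are plain std::unique_ptr members whose destructors don't throw: the compiler-generated destructor already releases them in reverse declaration order, so the explicit resets and try/catch can go.

```cpp
// Members clean themselves up in reverse declaration order; no explicit
// reset() calls or exception guards needed (destructors should not throw).
Runtime::~Runtime() = default;
```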
```cpp
}

void Runtime::memcpyD2H(void *dst, const void *src, size_t size) {
    SPDLOG_DEBUG("[RUNTIME] memcpyD2H: Called with runtime device: {}", device_.toString());
```
There's no need for this many checks.
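In the spirit of that comment, a device-to-host copy can be a single backend call plus one error check. The sketch below assumes a CUDA backend and a free function rather than the PR's Runtime method:

```cpp
#include <cstddef>
#include <cuda_runtime.h>
#include <stdexcept>

// Minimal D2H copy: cudaMemcpy already validates its arguments and device
// state, so one status check is enough.
void memcpyD2H(void *dst, const void *src, std::size_t size) {
    cudaError_t err = cudaMemcpy(dst, src, size, cudaMemcpyDeviceToHost);
    if (err != cudaSuccess) {
        throw std::runtime_error(cudaGetErrorString(err));
    }
}
```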
```diff
 }
-if (this->device().getType() == src->device().getType()) {
+if (this->device().getType() == src->device().getType() && this->device().getIndex() == src->device().getIndex()) {
     op::rearrange_(Tensor(const_cast<TensorImpl *>(this)->shared_from_this()), src);
```
Too many checks.
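One hypothetical way to trim the repeated checks: give Device an equality operator so the call site compares devices in a single expression. The Device stub below is illustrative; only the accessor names are taken from the diff above.

```cpp
// Illustrative Device stub exposing the accessors used in the diff above.
class Device {
public:
    Device(int type, int index) : type_(type), index_(index) {}
    int getType() const { return type_; }
    int getIndex() const { return index_; }

private:
    int type_;
    int index_;
};

// Hypothetical helper: one expression covers both checks, so the call site
// becomes `if (this->device() == src->device()) { ... }`.
inline bool operator==(const Device &a, const Device &b) {
    return a.getType() == b.getType() && a.getIndex() == b.getIndex();
}
```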
Linked issue: #634
Build the test binary:

```sh
xmake build infinicore-test
```

Run the tests:

```sh
INFINICORE_LOG_LEVEL=info build/linux/x86_64/release/infinicore-test --nvidia --test all
```